METHOD AND SYSTEM FOR SELF-DETERMINING THE VALUES OF INTRINSIC PARAMETERS AND EXTRINSIC PARAMETERS OF A CAMERA PLACED AT THE EDGE OF A ROADWAY
Patent abstract:
The present invention relates to a method of self-determination of the values of intrinsic parameters and extrinsic parameters of a camera placed at the edge of a roadway. This method is characterized in that it comprises: a step E10 of detecting a vehicle passing in front of the camera, and a step E20 of determining, from at least one 2D image of the detected vehicle taken by the camera and from at least one predetermined vehicle 3D model, intrinsic and extrinsic parameters of the camera relative to the reference frame of the predetermined vehicle 3D model(s), so that a projection of said one or of said predetermined vehicle 3D models corresponds to the one or more 2D images actually taken by said camera. The present invention also relates to a method for determining at least one physical quantity related to the positioning of said camera with respect to said roadway, to systems provided for implementing said methods and, finally, to computer programs for implementing said methods.
Publication number: FR3023949A1
Application number: FR1456767
Filing date: 2014-07-15
Publication date: 2016-01-22
Inventors: Alain Rouh; Jean Beaudet; Laurent Rostaing
Applicant: Morpho SA
IPC main class:
Patent description:
[0001] The present invention relates to a method of self-determination of the values of intrinsic parameters and extrinsic parameters of a camera placed at the edge of a roadway. It also relates to a method for determining at least one physical quantity related to the positioning of said camera with respect to said roadway, to systems provided for implementing said methods and, finally, to computer programs for implementing said methods. FIG. 1 shows a camera 10 placed at the edge of a roadway 20 on which an automobile 30 travels, passing in front of the camera 10. The road 20 and the car 30 constitute a scene. On the right of FIG. 1 is shown the 2D image 40 taken by the camera 10 at a given moment. Throughout the following description, the camera 10 is considered in isolation, but it will be understood that, according to the invention, it could be part of a multi-camera shooting system, for example two cameras forming a stereoscopic shooting system. A simplified model of a camera such as the camera 10, very commonly used in the present field of the art, considers it to be a pinhole performing a so-called perspective projection of the points Pi of the vehicle 30 onto the image plane 40.
Thus, the relationship between the coordinates (x, y, z) of a point Pi of the vehicle 30 and the coordinates (u, v) of the corresponding point pi of the 2D image 40 can be written in so-called homogeneous coordinates:

λ·(u, v, 1)^T = [M]·(x, y, z, 1)^T = [K]·[R T]·(x, y, z, 1)^T

where λ is an arbitrary scalar. The matrix [M] is a 3x4 perspective projection matrix that can be decomposed into a 3x4 positioning matrix [R T] and a 3x3 calibration matrix [K]. The calibration matrix [K] is defined by the focal lengths αu and αv of the camera, expressed in pixel dimensions along the axes u and v of the image 40, as well as by the coordinates u0 and v0 of the origin point of the 2D image 40:

[K] = | αu  0   u0 |
      | 0   αv  v0 |
      | 0   0   1  |

As for the positioning matrix [R T], it is composed of a 3x3 rotation matrix R and a 3-dimensional translation vector T which define by their respective components the positioning (distance, orientation) of the reference frame of the scene with respect to the camera 10. For further details on the model just described, refer to R. Hartley and A. Zisserman's book "Multiple View Geometry in Computer Vision", published by Cambridge University Press, and in particular to Chapter 6 of that book. [0002] In general, the coefficients of the calibration matrix [K] are the intrinsic parameters of the camera concerned, whereas those of the positioning matrix [R T] are its extrinsic parameters. Thus, in patent application US20100283856, a vehicle is used to perform the calibration of the camera, it being understood that the calibration in question is the determination of the projection matrix [M]. The vehicle in question carries markers whose relative positions are known. As the vehicle passes the camera, a 2D image is taken in one place and another 2D image in a second place. It is the images of these markers in each of the 2D images that are used to calculate the projection matrix [M].
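As a concrete illustration, the perspective projection above can be sketched in a few lines of code. The numeric values of K, R and T below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def project(K, R, T, P):
    """Project a scene point P = (x, y, z) to pixel coordinates (u, v)."""
    q = K @ (R @ P + T)      # homogeneous coordinates lambda * (u, v, 1)
    return q[:2] / q[2]      # divide out the arbitrary scalar lambda

# Calibration matrix [K]: focal lengths (alpha_u, alpha_v) in pixels and
# principal point (u0, v0); the values are assumptions for this sketch.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # positioning [R T]: identity rotation,
T = np.array([0.0, 0.0, 10.0])       # scene origin 10 units in front of the camera

u, v = project(K, R, T, np.array([1.0, 0.5, 0.0]))   # -> (400.0, 280.0)
```

The division by the third homogeneous coordinate is exactly the elimination of the arbitrary scalar λ in the relation above.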
[0003] The aim of the present invention is to propose a method of self-determination of the values of intrinsic parameters and extrinsic parameters of a camera placed at the edge of a roadway. "Self-determination" means that the system is capable of determining the values of all or part of the parameters of the projection matrix [M] by the sole implementation of this self-determination method, without carrying out a particular measurement procedure and/or using a carrier vehicle with markers such as that used in US20100283856. To this end, the present invention relates to a method of self-determination of the values of intrinsic parameters and extrinsic parameters of a camera placed at the edge of a roadway, characterized in that it comprises: a step of detecting a vehicle passing in front of the camera; and a step of determining, from at least one 2D image of the detected vehicle taken by the camera and from at least one predetermined vehicle 3D model, intrinsic and extrinsic parameters of the camera relative to the reference frame of the predetermined vehicle 3D model or models, so that a projection of said one or of said predetermined vehicle 3D models corresponds to said one or more 2D images actually taken by said camera. [0004] The present invention further relates to a method for determining at least one physical quantity related to the positioning of a camera placed at the edge of a roadway.
This method is characterized in that it comprises: a step of determining the values of the intrinsic parameters and the extrinsic parameters of said camera by implementing the self-determination method just described; a step of establishing, from said parameter values, the positioning matrix of the camera; a step of calculating the matrix of the inverse transformation; and a step of deducing, from said positioning matrix and the matrix of the inverse transformation, the or each of said physical quantities, each physical quantity being one of the following: the height of the camera with respect to the road, the distance of said camera from the recognized vehicle, the direction of the road relative to the camera, and the equation of the road with respect to the camera. The present invention also relates to a system for self-determination of the values of the intrinsic parameters and the extrinsic parameters of a camera placed at the edge of a roadway, characterized in that it comprises: means for detecting a vehicle passing in front of the camera; and means for determining, from at least one 2D image of the detected vehicle taken by the camera and from at least one predetermined vehicle 3D model, intrinsic and extrinsic parameters of the camera relative to the reference frame of the predetermined vehicle 3D model or models, so that a projection of said one or of said predetermined vehicle 3D models corresponds to said one or more 2D images actually taken by said camera. Finally, it relates to computer programs for implementing the methods just described. [0005] The characteristics of the invention mentioned above, as well as others, will appear more clearly on reading the following description of exemplary embodiments, said description being given in relation to the attached drawings, among which: FIG.
1 is a view of a scene with a vehicle passing in front of a camera connected to an image processing system for carrying out the method of the invention; FIG. 2a is a diagram illustrating the method of self-determination of the values of intrinsic parameters and extrinsic parameters of a camera according to a first embodiment of the invention; FIG. 2b is a diagram illustrating the method of self-determination of the values of intrinsic parameters and extrinsic parameters of a camera according to a second embodiment of the present invention; FIG. 3 is a diagram illustrating a step of the self-determination method of the invention according to a first embodiment; FIG. 4 is a diagram illustrating the same step of the self-determination method of the invention according to a second embodiment; FIG. 5 is a diagram illustrating a method of determining at least one physical quantity related to the positioning of a camera with respect to the roadway; and FIG. 6 is a block diagram of an image processing system for carrying out the method of the invention. The method of self-determination of the intrinsic and extrinsic parameters of a camera 10 (see FIG. 1) of the present invention is implemented in an image processing unit 50 intended to receive the 2D images taken by the camera 10. In a first embodiment of the invention, shown in FIG. 2a, the first step E10 is a step of detecting a vehicle 30 passing in front of the camera 10. For example, this detection is performed from an image taken by the camera 10, or from images of a sequence of 2D images 100 taken by the camera 10, the detection then being completed by a tracking process. A process such as that described for the detection of license plates in Louka Dlagnekov's thesis at the University of California, San Diego, entitled "Video-based Car Surveillance: License Plate, Make and Model Recognition", published in 2005, can thus be used for this step E10.
[0006] A second process step E20 is a step of determining, from at least one 2D image 100 of the vehicle detected in step E10, taken by the camera 10, and from at least one predetermined vehicle 3D model 200 among a set of predetermined vehicle 3D models of different categories (e.g., different vehicle models of different makes), intrinsic and extrinsic parameters of the camera 10 relative to the reference frame of the predetermined vehicle 3D model(s) 200, so that a projection by the camera 10 of said one or of said predetermined vehicle 3D models 200 corresponds to said one or more 2D images 100 actually taken by said camera 10. [0007] According to the terminology of the present description, a predetermined vehicle 3D model is a set of points Qk of coordinates (x, y, z) in a particular reference frame, called the reference frame. For example, the x-axis X of this reference frame is a transverse axis of the vehicle, the y-axis Y is the vertical axis of the vehicle and the depth axis Z is the longitudinal axis of the vehicle. As for the origin O of this reference frame, it is for example the projection along the y-axis Y of the barycentre of said vehicle onto a plane parallel to the plane (X, Z) and tangent to the lower part of the vehicle wheels normally in contact with the ground. The or each predetermined vehicle 3D model is for example stored in a database 51 of the unit 50 in FIG. 1. [0008] In order to limit the number of predetermined vehicle 3D models to be used in step E20, a second embodiment of the self-determination method, shown in FIG. 2b, also comprises: a step E11 of recognizing, from a 2D image or from at least one image of a sequence of 2D images taken by the camera 10, at least one vehicle characteristic of the vehicle detected in the detection step E10, and a step E12 of associating with said one or more vehicle characteristics recognized in step E11 at least one predetermined vehicle 3D model 200.
The predetermined vehicle 3D models {Qk} which are considered in the determination step E20 are then the predetermined vehicle 3D model or models which have been associated, in step E12, with the vehicle characteristic(s) recognized in step E11. The vehicle characteristic in question here may be related to a particular vehicle (the car registered xxxx), to a particular vehicle model (Simca Plein Ciel vehicles), or to a set of vehicle models (vehicles of the Peugeot® brand, all models combined). The characteristic or characteristics of the vehicle that can be used are, for example, Scale Invariant Feature Transform (SIFT) features, presented in David G. Lowe's article entitled "Distinctive Image Features From Scale-Invariant Keypoints", published in International Journal of Computer Vision 60.2 (2004), pp. 91-110; Speeded Up Robust Features (SURF), presented in Herbert Bay, Tinne Tuytelaars and Luc Van Gool's "SURF: Speeded Up Robust Features", published in the 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006; shape descriptors; etc. These characteristics can also be related to the appearance of said vehicle (so-called Eigenface or Eigencar vectors). Thus, step E11 of the method of the invention can implement what is generally called a "Make and Model Recognition" method; for such a method, reference can be made to Louka Dlagnekov's thesis already mentioned above. The characteristic in question can also be one which uniquely identifies a particular vehicle, for example the registration number on its license plate; step E11 then consists in recognizing this registration number, and Louka Dlagnekov's thesis, already mentioned, also describes license plate recognition methods. Two embodiments are envisaged for the implementation of step E20 of the self-determination method of the invention described above in relation to FIGS. 2a and 2b. The first of these embodiments is described in relation to FIG. 3.
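As a hedged sketch of how such feature correspondences can be exploited in steps E11/E12, the snippet below matches descriptor vectors (stand-ins for SIFT or SURF descriptors, here simply random vectors) against stored model descriptors by nearest-neighbour search with Lowe's ratio test; the function name and the data are illustrative assumptions, not part of the patent.

```python
import numpy as np

def match_descriptors(query, reference, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test (ambiguous matches dropped)."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(reference - q, axis=1)   # distances to all stored descriptors
        j, k = np.argsort(d)[:2]                    # two closest candidates
        if d[j] < ratio * d[k]:                     # keep only clearly best matches
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(2)
reference = rng.normal(size=(20, 64))   # stored descriptors of known vehicle models
query = reference[5:6] + 1e-3           # an image descriptor close to model entry 5
matches = match_descriptors(query, reference)   # -> [(0, 5)]
```

The ratio test discards a match when the second-best candidate is almost as close as the best one, which is the usual way of filtering SIFT/SURF correspondences before model association.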
In a first substep E21, a 3D model of the vehicle 30 detected in step E10 is established from at least two 2D images 100 of a sequence of 2D images taken by the camera 10 at different times t0 to tn while said vehicle passes in front of the camera 10. The 3D model in question is a model that corresponds to the vehicle 30 actually in front of the camera 10, unlike the predetermined vehicle 3D model. Such a 3D model of the vehicle 30 is a set of points Pi with coordinates taken in a reference frame linked to the camera which, projected by the camera 10 at an arbitrary time, for example at the time t0, form a set of points pi0 in a 2D image, denoted I0, formed by the camera 10. At a time tj, the vehicle has moved with respect to the time t0; for the camera 10, it has undergone a rotation of matrix [Rj] and a translation of vector Tj. Thus, a point Pi of the detected vehicle is, at a time tj, projected by the camera 10 onto a projection point p̂ij of the image Ij, such that, in homogeneous coordinates:

λ·(p̂ij, 1)^T = K·[Rj Tj]·(Pi, 1)^T

where K is the calibration matrix and [Rj Tj] is a positioning matrix. [0009] By convention, the position of the vehicle with respect to the camera at the time t0 of taking the first image of the 2D image sequence is considered to be the reference position, so that the positioning matrix at this time t0 is the matrix [I 0]. Next, a so-called bundle adjustment method is applied (see, for example, the article by Bill Triggs et al. entitled "Bundle adjustment - a modern synthesis", published in Vision Algorithms: Theory and Practice, Springer Berlin Heidelberg
, 2000, pages 298 to 372), which consists in considering several points Pi of different coordinates and in varying the values of the parameters of the calibration matrix [K] and of the positioning matrices [Rj Tj] and, for each set of parameter values and of point coordinates Pi, first determining the projected points p̂ij using the relation above, then comparing them with the points pij actually observed on each image Ij, and retaining only the points Pi and the parameter values of the positioning matrices [Rj Tj] and of the calibration matrix [K] that maximize the match between the points p̂ij and the points pij, i.e. those which minimize the distances between these points. We can therefore write:

(Pi, [Rj Tj], K)_optimal = argmin Σ(i,j) ‖p̂ij − pij‖²

The resolution of this equation can be performed using a Levenberg-Marquardt non-linear least-squares optimization algorithm. Advantageously, the bundle adjustment method is implemented after an initialization phase of the intrinsic and extrinsic parameters as well as of the coordinates of the points Pi of the 3D model of the detected vehicle, so as to avoid converging towards a suboptimal solution while limiting the consumption of computing resources. The intrinsic parameters of the camera can for example be initialized by means of the information contained in its data sheet or obtained empirically, such as its focal length / pixel size ratio for each of the axes of its sensor. Similarly, the principal point can be taken to be at the center of the 2D image. The values contained in this information, without being precise, are suitable approximations. [0010] To initialize the extrinsic parameters, one can proceed as follows.
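The bundle-adjustment cost above can be sketched as a plain reprojection-error function. This is a minimal sketch under assumed numeric values (camera, poses and points are all synthetic); a real solver would minimize this cost with Levenberg-Marquardt over a minimal rotation parametrization rather than full matrices.

```python
import numpy as np

def reprojection_cost(K, poses, points3d, observations):
    """Sum of squared distances between projected points p̂ij and observed points pij."""
    cost = 0.0
    for (R, T), obs in zip(poses, observations):
        for P, p_obs in zip(points3d, obs):
            q = K @ (R @ P + T)          # homogeneous projection K [Rj Tj] Pi
            p_hat = q[:2] / q[2]         # projected pixel p̂ij
            cost += float(np.sum((p_hat - p_obs) ** 2))
    return cost

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
points3d = [np.array([0.5, 0.2, 4.0]),
            np.array([-0.3, 0.1, 5.0]),
            np.array([0.0, -0.4, 6.0])]
poses_true = [(np.eye(3), np.zeros(3)),               # reference pose [I 0] at t0
              (np.eye(3), np.array([0.3, 0.0, 0.0]))]  # pose [Rj Tj] at tj

# Synthetic observations generated from the true parameters
observations = [[(K @ (R @ P + T))[:2] / (K @ (R @ P + T))[2] for P in points3d]
                for (R, T) in poses_true]

cost_true = reprojection_cost(K, poses_true, points3d, observations)   # ~ 0
poses_bad = [poses_true[0], (np.eye(3), np.array([0.4, 0.0, 0.0]))]
cost_bad = reprojection_cost(K, poses_bad, points3d, observations)     # large
```

At the true parameters the cost vanishes; perturbing one translation by 10 cm moves the projections by tens of pixels, which is exactly the signal the optimizer descends on.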
We first determine, from a certain number of correspondences established between points pij of the image Ij and points pi0 of the first image I0, a so-called essential matrix E which satisfies the following relation:

(K⁻¹·pij)^T · E · (K⁻¹·pi0) = 0

For more details on this process, refer to the book entitled "Multiple View Geometry in Computer Vision" by R. Hartley and A. Zisserman, published by Cambridge University Press, and in particular Chapter 11.7.3. [0011] Then, from this essential matrix E, the matrices [Rj Tj] for the different times tj are calculated. For more details on this process, see Chapter 9.6.2 of the same book. Finally, for the initialization of the 3D coordinates of the points Pi, it is possible to use pairs of images Ij and Ij' and point correspondences pij and pij' in these pairs of images, the camera considered here having for intrinsic and extrinsic parameters the above parameters estimated for initialization purposes. For more details on this process, see Chapter 10 of the book mentioned above. At the end of this first substep E21, there is a 3D model of the detected vehicle, defined up to a scale factor and not yet aligned, that is to say a set of points Pi of this vehicle in the reference position mentioned above (position at time t0). In a second substep E22, the 3D model of the detected vehicle is aligned with at least one predetermined vehicle 3D model {Qk}. In the second embodiment envisaged above in relation to FIG. 2b, the predetermined vehicle 3D model or models considered here are those that have been associated, in step E12, with the vehicle characteristic(s) recognized in step E11. For this alignment, the parameters of a geometric transformation matrix [TG] are sought which, applied to the set or to each set of points Qk of the or each predetermined vehicle 3D model, makes it possible to find the set of points Pi forming the 3D model of the detected vehicle.
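The epipolar relation above can be verified numerically. In the sketch below the relative pose (R, T) between times t0 and tj is an assumed synthetic value, the essential matrix is built as E = [T]ₓ·R, and the residual of the constraint is checked for one correspondence.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x, such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Assumed relative pose between t0 and tj: small rotation about the vertical axis
a = 0.05
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
T = np.array([0.5, 0.0, 0.1])
E = skew(T) @ R                        # essential matrix of the image pair (I0, Ij)

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P = np.array([0.3, -0.2, 5.0])         # a vehicle point in the camera frame at t0
p0 = K @ P
p0 = p0 / p0[2]                        # its homogeneous image point pi0 (pose [I 0])
pj = K @ (R @ P + T)
pj = pj / pj[2]                        # the corresponding point pij (pose [Rj Tj])

Kinv = np.linalg.inv(K)
residual = (Kinv @ pj) @ E @ (Kinv @ p0)   # vanishes for a true correspondence
```

In practice E is estimated the other way around, from many such correspondences, and (R, T) is then extracted from E; the identity checked here is what makes that estimation possible.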
The matrix [TG] can be decomposed into a scale change matrix [SM] and an alignment matrix [RM TM], where RM is a rotation matrix and TM is a translation vector. The scaling matrix [SM] is a 4x4 matrix that can be written as:

[SM] = | I3  0  |
       | 0   sM |

where I3 is the 3x3 identity matrix and sM is a scale ratio. If we have a second camera which is calibrated with respect to the first camera 10 (it will be noted that, in this case, since we consider cameras calibrated between them, we are only looking for the extrinsic parameters of the camera 10 with respect to the road), it is possible to establish from a single pair of images, by standard methods of stereoscopy, a model of the detected vehicle such that sM is equal to 1; but if we do not have such a second camera calibrated with respect to the first camera 10, we can proceed as follows. For a number of values of the scale ratio sM, the alignment matrix [RM TM] is determined. To do this, we can use the ICP (iterative closest point) algorithm, which is described by Paul J. Besl and Neil D. McKay in an article entitled "A Method for Registration of 3-D Shapes", published in 1992 in Robotics-DL tentative, International Society for Optics and Photonics. For each scale ratio value sM, an alignment score s of good correspondence is established, for example equal to the number of points Pi which are at most at a distance d from points P̂k, such that:

‖Pi − P̂k‖ < d, with P̂k = [RM TM]·[SM]·Qk

Then the scale ratio value sM, together with the corresponding values of the parameters of the alignment matrix [RM TM], which obtained the best alignment score s is selected. If there are several predetermined vehicle 3D models, the best alignment score is determined for each predetermined vehicle 3D model, as previously for a single one, and the predetermined vehicle 3D model that achieved the best of these best alignment scores is then retained.
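The scale-ratio search of substep E22 can be sketched as follows. This is a simplified illustration under assumed data: the alignment [RM TM] is taken as already known for every candidate scale (the ICP refinement that would normally produce it is omitted), and the model points are random stand-ins.

```python
import numpy as np

def alignment_score(P, Q, s, RM, TM, d=0.05):
    """Number of reconstructed points Pi lying within d of an aligned model point."""
    Q_aligned = s * (Q @ RM.T) + TM            # apply the scale, then the alignment [RM TM]
    dists = np.linalg.norm(P[:, None, :] - Q_aligned[None, :, :], axis=2)
    return int(np.sum(dists.min(axis=1) < d))

rng = np.random.default_rng(0)
Q = rng.normal(size=(50, 3))                    # points Qk of a predetermined vehicle 3D model
RM, TM = np.eye(3), np.array([1.0, 0.0, 0.0])   # alignment assumed already estimated
P = 0.5 * (Q @ RM.T) + TM                       # reconstructed points Pi: true scale ratio 0.5

scales = [0.25, 0.5, 1.0, 2.0]                  # candidate values of the scale ratio sM
scores = [alignment_score(P, Q, s, RM, TM) for s in scales]
best_scale = scales[int(np.argmax(scores))]     # -> 0.5
```

The score is exactly the count described in the text: points matched within the threshold d; the scale that maximizes it is retained, and the same "best of the best scores" rule extends the search across several candidate 3D models.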
The predetermined vehicle 3D model selected corresponds to a vehicle model that can be recognized in this way. In a third substep E23, the extrinsic parameters of the camera are determined relative to the reference frame of the predetermined vehicle 3D model of the recognized vehicle. To do this, we proceed as follows. [0012] To each point pk0 of the 2D image I0 delivered by the camera 10 at the time t0, there corresponds a point Qk in the predetermined vehicle 3D model, so that one can write:

pk0 = K·sM·[RM TM]·Qk = K·[RM TM]·Qk

(in homogeneous coordinates, the scalar factor sM can be dropped). Thus, the matrix of extrinsic parameters of the camera relative to the predetermined vehicle 3D model of the recognized vehicle is the matrix:

[R T] = [RM TM]

We now describe, in relation to FIG. 4, a second embodiment envisaged for the implementation of step E20 mentioned above in relation to FIGS. 2a and 2b. In this embodiment, each predetermined vehicle 3D model 200 of said set of predetermined vehicle 3D models, for example stored in the database 51, comprises not only the predetermined vehicle 3D model proper 201, that is to say a set of points Qk, but also points pk of at least one reference 2D image 202 obtained by projection, by a real or virtual reference camera, of the points Qk of the predetermined vehicle 3D model 201. Thus, for each predetermined vehicle 3D model 200, the points pk of the or each reference image 202 are in correspondence with points Qk of said predetermined vehicle 3D model proper 201 (see arrow A). [0013] There is also a 2D image 100 actually taken by the camera 10 of the vehicle 30 detected in step E10 of the method of the invention (see FIGS. 2a and 2b). In a first substep E210, first correspondences are made between points pi of the 2D image 100 of the detected vehicle 30 and points pk of the reference 2D image(s) 202 (arrow B).
Then, in a second substep E220, correspondences are established between points pi of the 2D image 100 of the detected vehicle 30 and points Qk of the predetermined vehicle 3D model proper 201 considered (arrow C). As previously, in the second embodiment envisaged above in relation to FIG. 2b, the predetermined vehicle 3D model(s) considered here are those associated, in step E12, with the vehicle characteristic(s) recognized in step E11. It is considered that each point pi of the 2D image 100 is the result of a transformation of a point Qk of the predetermined vehicle 3D model proper 201 of the detected vehicle. This transformation can be likened to a projection operated by the camera 10, an operation subsequently called pseudo-projection, and one can thus write:

pi = [A]·Qk

where [A] is a 3x4 matrix called the pseudo-projection matrix. [0014] If we have a sufficient number of correspondences (generally at least 6), this relation yields an overdetermined linear system from which it is possible to determine the coefficients of the pseudo-projection matrix [A]. This calculation of the matrix [A], performed in step E230, is described, for example, in Chapter 7.1 of the book mentioned above. [0015] In the next step E240, the intrinsic and extrinsic parameters of said camera are deduced from the parameters thus determined of said pseudo-projection matrix [A]. The pseudo-projection matrix [A] can be written in the following factored form:

[A] = K·[R T] = [K·[R] | K·T]

where [R] is the matrix of the rotation of the camera 10 relative to the predetermined vehicle 3D model of the recognized vehicle and T the translation vector of the camera with respect to the same predetermined vehicle 3D model. We write [B] for the 3x3 sub-matrix on the left of the pseudo-projection matrix [A]. We have: [B] = K·[R].
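The linear estimation of [A] in substep E230 is the classical direct linear transform (DLT). Below is a minimal sketch with synthetic, assumed values for the camera and the model points: each correspondence (Qk, pk) contributes two linear equations, and the 12 coefficients of [A] are recovered (up to scale) as the singular vector of the smallest singular value.

```python
import numpy as np

def estimate_pseudo_projection(Q, p):
    """DLT estimate of the 3x4 matrix [A] from n >= 6 correspondences (Qk, pk)."""
    rows = []
    for (x, y, z), (u, v) in zip(Q, p):
        X = np.array([x, y, z, 1.0])
        rows.append(np.concatenate([X, np.zeros(4), -u * X]))   # u-equation
        rows.append(np.concatenate([np.zeros(4), X, -v * X]))   # v-equation
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)    # null-space vector of the overdetermined system

# Synthetic ground truth (illustrative values, not from the patent)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
RT = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [6.0]])])
A_true = K @ RT

rng = np.random.default_rng(1)
Q = rng.uniform(-1.0, 1.0, size=(10, 3))                    # model points Qk
ph = (A_true @ np.hstack([Q, np.ones((10, 1))]).T).T
p = ph[:, :2] / ph[:, 2:3]                                  # their projected images pk

A_est = estimate_pseudo_projection(Q, p)
A_est = A_est / A_est[2, 3] * A_true[2, 3]                  # remove the arbitrary scale
```

With exact correspondences the estimate matches the ground-truth matrix to numerical precision; with noisy real correspondences the same least-squares solution is simply the best fit.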
We can then write:

[B]·[B]^T = K·[R]·(K·[R])^T = K·[R]·[R]^T·K^T = K·K^T

If we assume that the calibration matrix K is written in the form given above, one can write, by developing K·K^T:

K·K^T = | αu² + u0²   u0·v0       u0 |
        | u0·v0       αv² + v0²   v0 |
        | u0          v0          1  |

The product [B]·[B]^T can be written in terms of its coefficients: [B]·[B]^T = [bij], with i, j = 1 to 3. Since the matrix [A] is determined by the linear system only up to an arbitrary scalar λ, in practice [B]·[B]^T = λ²·K·K^T and, as (K·K^T)33 = 1, λ² is equal to b33. From the knowledge of [B]·[B]^T obtained from the matrix [A], it is thus possible to calculate λ and the coefficients of the calibration matrix K, then the parameters of the matrix [R T] = (1/λ)·K⁻¹·[A]. While the first embodiment (see FIG. 3) requires a sequence of at least two images, the second mode (FIG. 4) requires only one image, but a more elaborate predetermined vehicle 3D model, since it is associated with a reference 2D image. Once the intrinsic and extrinsic parameters of the camera have been determined relative to the reference frame of the predetermined vehicle 3D models (the reference frame is identical for all the 3D models stored in the database 51) upon the passage, in front of the camera 10, of a vehicle 30 having distinctive features that can be recognized in step E11, it is possible to determine a number of physical quantities. Thus, the present invention relates to a method for determining at least one physical quantity related to the positioning of a camera placed at the edge of a roadway. It includes (see FIG. 5) a step E1 of determining the values of the intrinsic parameters and extrinsic parameters of said camera 10 by implementing the self-determination method just described. It also comprises a step E2 of establishing, from said parameter values, the positioning matrix [R T] of the camera, then a step E3 of calculating the matrix [R' T'] of the inverse transformation.
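The decomposition of [A] described above can be sketched as follows, assuming a zero-skew calibration matrix of the form given earlier and λ > 0; the ground-truth values used for the check are synthetic assumptions.

```python
import numpy as np

def decompose_pseudo_projection(A):
    """Recover lambda, K and [R T] from A = lambda * K @ [R | T] (zero-skew K)."""
    B = A[:, :3]
    M = B @ B.T                         # = lambda^2 * K K^T, since R R^T = I
    lam = np.sqrt(M[2, 2])              # (K K^T)[2,2] = 1, hence b33 = lambda^2
    N = M / M[2, 2]                     # = K K^T
    u0, v0 = N[0, 2], N[1, 2]
    au = np.sqrt(N[0, 0] - u0 ** 2)     # from N[0,0] = alpha_u^2 + u0^2
    av = np.sqrt(N[1, 1] - v0 ** 2)     # from N[1,1] = alpha_v^2 + v0^2
    K = np.array([[au, 0.0, u0], [0.0, av, v0], [0.0, 0.0, 1.0]])
    RT = np.linalg.inv(K) @ A / lam     # [R T] = (1/lambda) K^-1 [A]
    return lam, K, RT

# Synthetic check: build A from assumed K, R, T and an arbitrary scalar lambda = 2
K_true = np.array([[750.0, 0.0, 310.0], [0.0, 760.0, 235.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.1), np.sin(0.1)
R_true = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
RT_true = np.hstack([R_true, np.array([[0.2], [-1.3], [8.0]])])
A = 2.0 * K_true @ RT_true

lam, K_est, RT_est = decompose_pseudo_projection(A)
```

Each line mirrors one step of the text: b33 yields λ², the off-diagonal and diagonal entries of K·K^T yield the principal point and focal lengths, and K⁻¹·[A]/λ yields the positioning matrix.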
[0016] Finally, it comprises a step E4 of deducing, from said positioning matrix [R T] and the inverse transformation matrix [R' T'], the or each of said physical quantities in the following manner:
- the height h of the camera with respect to the road: h = T'y,
- the lateral distance d of the camera from the recognized vehicle: d = T'x,
- the direction of the road relative to the camera: the 3rd column of the matrix R,
- the equation of the plane of the road relative to the camera: (2nd column of the matrix R)·P + T'y = 0, for a point P in camera coordinates.
Two quantities remain unknown. The first is the longitudinal position relative to the road; it can nevertheless be established by means of landmarks along the road, such as kilometer markers. The second is the lateral position relative to the road (for example, the distance to the center of the nearest lane). It cannot be determined from the passage of a single vehicle, but it can be from the passage of several vehicles: for each vehicle, the lateral distance to that vehicle is calculated, and the lowest lateral distance is selected as the distance to the center of the nearest lane. Statistical analyses of the lateral distance to the camera of the vehicles passing in front of it can be used to estimate the positions of the lanes with respect to the camera. It is then possible to determine, for each vehicle passing in front of the camera, the number of the lane on which it is travelling. [0017] FIG. 6 shows a processing system 50 which is provided with a processing unit 52, a program memory 53, a data memory 54 including in particular the database 51 in which the predetermined vehicle 3D models are stored, and an interface 55 allowing the connection of the camera 10, all interconnected by a bus 56. The program memory 53 contains a computer program which, when executed, implements the steps of the processes described previously. Thus, the processing system 50 contains means for acting according to these steps.
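Steps E2 to E4 above can be sketched in a few lines: the inverse transformation gives R' = Rᵀ and T' = −RᵀT (the camera pose expressed in the road-level vehicle frame), from which the listed quantities follow. The numeric pose below is an illustrative assumption.

```python
import numpy as np

def camera_geometry(R, T):
    """Physical quantities from the positioning matrix [R T] (vehicle frame -> camera)."""
    Rp = R.T                     # R' of the inverse transformation
    Tp = -Rp @ T                 # T' : camera position in the vehicle/road reference frame
    height = Tp[1]               # h = T'_y  (Y is the vertical axis, origin on the ground)
    lateral = Tp[0]              # d = T'_x : lateral distance to the recognized vehicle
    road_dir = R[:, 2]           # direction of the road in camera coordinates (3rd column)
    road_normal = R[:, 1]        # normal of the road plane (2nd column)
    return height, lateral, road_dir, road_normal

# Illustrative pose: camera 4 m above the road, 1.5 m to the side, vehicle 10 m ahead
R = np.eye(3)
T = np.array([-1.5, -4.0, 10.0])
height, lateral, road_dir, road_normal = camera_geometry(R, T)
```

With this pose the sketch yields h = 4.0 and d = 1.5, and the road direction is the camera's own Z axis, consistent with an identity rotation.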
Depending on the case, it constitutes either a system for self-determination of the values of intrinsic parameters and extrinsic parameters of a camera placed at the edge of a roadway, or a system for determining at least one physical quantity related to the positioning of a camera placed at the edge of a roadway.
Claims:
Claims (13) [0001] 1) A method of self-determination of the values of intrinsic parameters and extrinsic parameters of a camera (10) placed at the edge of a roadway, characterized in that it comprises: a step E10 of detecting a vehicle passing in front of the camera; a step E20 of determining, from at least one 2D image of the detected vehicle taken by the camera and from at least one predetermined vehicle 3D model, intrinsic and extrinsic parameters of the camera with respect to the reference frame of the predetermined vehicle 3D model or models, so that a projection of said one or of said predetermined vehicle 3D models corresponds to said one or more of the 2D images actually taken by said camera. [0002] 2) A self-determination method according to claim 1, characterized in that it further comprises: a step E11 of recognizing, from a 2D image or at least one image of a sequence of 2D images, at least one vehicle characteristic of a vehicle detected in step E10; a step E12 of associating with the one or more vehicle characteristics recognized in step E11 at least one predetermined vehicle 3D model from a predetermined set of predetermined vehicle 3D models of different categories of vehicles; and in that the predetermined vehicle 3D model(s) that are considered in the determination step E20 are at least one predetermined vehicle 3D model that has been associated, in step E12, with the characteristic or characteristics recognized in step E11.
[0003] 3) A self-determination method according to claim 1 or 2, characterized in that said determining step comprises: a substep E21 of establishing, from at least two 2D images of said sequence of images, a 3D model of the vehicle detected in step E10; a substep E22 of aligning the predetermined vehicle 3D model or models considered with the 3D model of the recognized vehicle, in order to determine the parameters of a geometric transformation which, applied to the predetermined vehicle 3D model or models considered, gives the 3D model of the recognized vehicle; and a substep E23 of deducing, from the parameters of said transformation, the intrinsic and extrinsic parameters of said camera. [0004] 4) A self-determination method according to claim 3, characterized in that the alignment substep E22 consists in determining the parameters of said geometric transformation for different scale ratio values, in establishing for each scale ratio value an alignment score, and in selecting the scale ratio value and the parameters of said alignment transformation that obtained the best alignment score. [0005] 5) A self-determination method according to claim 3, characterized in that the alignment substep E22 consists in determining, for each predetermined vehicle 3D model considered, the parameters of said geometric transformation for different scale ratio values, in establishing for each scale ratio value an alignment score and in selecting the scale ratio value and the parameters of said alignment transformation that obtained the best alignment score, called the best alignment score, then in selecting the predetermined vehicle 3D model, the scale ratio value and the parameters of said alignment transformation that obtained the best of said best alignment scores.
[0006] 6) A method of self-determination according to claim 1 or 2, characterized in that each predetermined vehicle 3D model (200) is comprised of: the predetermined vehicle 3D model proper (201), and at least one reference 2D image (202) obtained by projection, by a real or virtual camera, of points of said predetermined vehicle 3D model proper (201); and in that said method comprises: a substep E210 of associating with points of a 2D image taken by the camera points of the reference 2D image of said predetermined vehicle 3D model considered; a substep E220 of associating with said points of the 2D image taken by the camera points of the predetermined vehicle 3D model proper; a substep E230 of determining the parameters of a pseudo-projection transformation which, applied to points of said 3D model proper, gives points of the 2D image taken by the camera; and a substep E240 of deducing, from the parameters of said pseudo-projection transformation, the intrinsic and extrinsic parameters of said camera. [0007] 7) A method for determining at least one physical quantity related to the positioning of a camera placed at the edge of a roadway, characterized in that it comprises: a step of determining the values of the intrinsic parameters and the extrinsic parameters of said camera by implementing the self-determination method according to one of the preceding claims; a step of establishing, from said parameter values, the camera positioning matrix; a step of calculating the matrix of the inverse transformation; and a step of deducing, from said positioning matrix and the matrix of the inverse transformation, the or each of said physical quantities, each physical quantity being one of the following quantities: the height of the camera relative to the road, the distance of said camera from the recognized vehicle, the direction of the road relative to the camera, the equation of the road relative to the camera.
[0008] 8) Method for determining at least one physical quantity according to claim 7, characterized in that the physical quantity or quantities comprise the lateral position of the camera with respect to the road, determined from the passage of several vehicles by calculating the lateral distance to each vehicle and selecting the smallest lateral distance.
[0009] 9) System for self-determination of the values of the intrinsic parameters and extrinsic parameters of a camera (10) placed at the edge of a roadway (20), characterized in that it comprises:
- means for detecting a vehicle passing in front of the camera,
- means for determining, from at least one 2D image taken by the camera of the detected vehicle and at least one predetermined vehicle 3D model, the intrinsic and extrinsic parameters of the camera relative to the reference frame of the predetermined vehicle 3D model(s), so that a projection of said one or more predetermined vehicle 3D models corresponds to the one or more 2D images actually taken by said camera.
[0010] 10) Self-determination system according to claim 9, characterized in that it further comprises:
- means for recognizing, from a 2D image or from at least one image of a sequence of 2D images, at least one vehicle characteristic of a vehicle passing in front of the camera (10),
- means for associating with the one or more recognized vehicle characteristics of the detected vehicle at least one predetermined vehicle 3D model from a predetermined set of 3D models of different categories of vehicles,
and in that the predetermined vehicle 3D model(s) considered by said determining means are the at least one predetermined vehicle 3D model that has been associated with the recognized vehicle characteristic(s).
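The selection rule of claim 8 can be sketched as follows: compute, for each passing vehicle, its distance from the camera measured orthogonally to the road direction, and keep the smallest (the vehicle in the lane nearest the camera). Representing vehicle positions in camera-centred coordinates and the road direction as a unit vector are assumptions of this sketch.

```python
import math

def lateral_distance(p, d):
    """Distance from the camera to a vehicle, measured orthogonally to the
    road direction d (assumed to be a unit vector); p is the vehicle
    position expressed relative to the camera."""
    along = sum(pi * di for pi, di in zip(p, d))
    return math.sqrt(sum((pi - along * di) ** 2 for pi, di in zip(p, d)))

def camera_lateral_position(vehicle_positions, d):
    # Claim 8: compute the lateral distance for each passing vehicle and
    # select the smallest one.
    return min(lateral_distance(p, d) for p in vehicle_positions)
```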
[0011] 11) System for determining at least one physical quantity related to the positioning of a camera placed at the edge of a roadway, characterized in that it comprises:
- means for determining the values of the intrinsic parameters and the extrinsic parameters of said camera by implementing the self-determination method according to one of claims 1 to 6,
- means for establishing, from said parameter values, the positioning matrix of the camera,
- means for calculating the matrix of the inverse transformation, and
- means for deducing, from said positioning matrix and/or the matrix of the inverse transformation, the or each of said physical quantities, each physical quantity being one of the following quantities:
- the height of the camera relative to the road,
- the distance of said camera from the recognized vehicle,
- the direction of the road relative to the camera,
- the equation of the road relative to the camera.
[0012] 12) Computer program stored in a memory of a system according to claim 9 or 10, provided, when executed, for carrying out the self-determination method according to one of claims 1 to 6.
[0013] 13) Computer program stored in a memory of a system according to claim 11, provided, when executed, for carrying out the method of determining at least one physical quantity according to claim 7 or 8.
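The claims above rest on the pinhole model described in the introduction: a world point X projects to pixel coordinates via u ~ K(R·X + t), where K holds the intrinsic parameters and (R, t) the extrinsic ones. A minimal sketch of that projection follows; the intrinsic matrix values used in the example are illustrative assumptions, not taken from the patent.

```python
def project(K, R, t, X):
    """Pinhole projection of a 3D world point X to pixel coordinates:
    form the camera-frame point x_c = R X + t, apply the intrinsics K,
    then divide by the homogeneous coordinate."""
    xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    u = [sum(K[i][j] * xc[j] for j in range(3)) for i in range(3)]
    return (u[0] / u[2], u[1] / u[2])
```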
Family patents:
Publication number | Publication date
US20160018212A1 | 2016-01-21
EP2975553A1 | 2016-01-20
FR3023949B1 | 2016-08-12
Cited references:
Publication number | Filing date | Publication date | Applicant | Patent title
ES2758734T3 | 2009-05-05 | 2020-05-06 | Kapsch Trafficcom Ag | Procedure to calibrate the image of a camera
FR2981185B1 | 2011-10-10 | 2014-02-14 | Univ Blaise Pascal Clermont II | Method of calibrating a computer vision system on a mobile
US9883163B2 | 2012-01-09 | 2018-01-30 | Disney Enterprises, Inc. | Method and system for determining camera parameters from a long range gradient based on alignment differences in non-point image landmarks
Cited by:
WO2014169238A1 | 2013-04-11 | 2014-10-16 | Digimarc Corporation | Methods for object recognition and related arrangements
US9489765B2 | 2013-11-18 | 2016-11-08 | Nant Holdings Ip, Llc | Silhouette-based object and texture alignment, systems and methods
CN109344677B | 2017-11-07 | 2021-01-15 | Great Wall Motor Co., Ltd. | Method, device, vehicle and storage medium for recognizing three-dimensional object
US10580164B2 | 2018-04-05 | 2020-03-03 | Microsoft Technology Licensing, LLC | Automatic camera calibration
US10944900B1 | 2019-02-13 | 2021-03-09 | Intelligent Security Systems Corporation | Systems, devices, and methods for enabling camera adjustments
Legal status:
2015-06-25 | PLFP | Fee payment | Year of fee payment: 2
2016-01-22 | PLSC | Publication of the preliminary search report | Effective date: 2016-01-22
2016-06-22 | PLFP | Fee payment | Year of fee payment: 3
2017-06-21 | PLFP | Fee payment | Year of fee payment: 4
2018-06-21 | PLFP | Fee payment | Year of fee payment: 5
2020-06-23 | PLFP | Fee payment | Year of fee payment: 7
2021-06-23 | PLFP | Fee payment | Year of fee payment: 8
2021-07-02 | CA | Change of address | Effective date: 2021-05-26
2021-07-02 | CD | Change of name or company name | Owner name: IDEMIA IDENTITY & SECURITY FRANCE, FR | Effective date: 2021-05-26
Priority:
Application number | Filing date | Patent title
FR1456767A | 2014-07-15 | METHOD AND SYSTEM FOR AUTODETERMINING THE VALUES OF INTRINSIC PARAMETERS AND EXTRINSIC PARAMETERS OF A CAMERA PLACED AT THE EDGE OF A PAVEMENT
EP15176000.6A | 2015-07-09 | Method and system for auto-determination of the values of the intrinsic and extrinsic parameters of a camera positioned by the side of a roadway
US14/798,850 | 2015-07-14 | Method and system for automatically determining values of the intrinsic parameters and extrinsic parameters of a camera placed at the edge of a roadway